Cardiovascular diseases are the most common cause of death worldwide. Detecting and treating heart-related conditions requires continuous blood pressure (BP) monitoring, along with many other parameters. Several invasive and non-invasive methods have been developed for this purpose. Most existing methods used in hospitals for continuous BP monitoring are invasive. Conversely, cuff-based BP monitoring methods, which can predict systolic blood pressure (SBP) and diastolic blood pressure (DBP), cannot be used for continuous monitoring. Several studies have attempted to predict BP from non-invasively collectible signals such as the photoplethysmogram (PPG) and the electrocardiogram (ECG), which can be used for continuous monitoring. In this study, we explore the applicability of autoencoders for predicting BP from PPG and ECG signals. Investigating the MIMIC-II dataset of over 12,000 instances, we find that a very shallow, one-dimensional autoencoder can extract the relevant features to predict SBP and DBP with state-of-the-art performance on a very large dataset. An independent test on a portion of the MIMIC-II dataset yields an MAE of 2.333 and 0.713 for SBP and DBP, respectively. On an external dataset of 40 subjects, the model trained on the MIMIC-II dataset yields an MAE of 2.728 and 1.166 for SBP and DBP, respectively. In both cases, the results meet British Hypertension Society (BHS) Grade A and surpass studies in the current literature.
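A minimal sketch of the kind of very shallow 1D autoencoder the abstract describes, with a small regression head for SBP/DBP. The layer sizes, window length, latent dimension, and the regressor head are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ShallowAE(nn.Module):
    """Shallow 1D autoencoder over two-channel (PPG + ECG) windows,
    with a linear head regressing [SBP, DBP] from the latent code."""
    def __init__(self, window_len=1024, latent_dim=32):
        super().__init__()
        # Encoder: a single strided 1D conv followed by a linear bottleneck.
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 8, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * (window_len // 4), latent_dim),
        )
        # Decoder mirrors the encoder to reconstruct the input window.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 8 * (window_len // 4)),
            nn.ReLU(),
            nn.Unflatten(1, (8, window_len // 4)),
            nn.ConvTranspose1d(8, 2, kernel_size=8, stride=4, padding=2),
        )
        self.regressor = nn.Linear(latent_dim, 2)  # [SBP, DBP]

    def forward(self, x):          # x: (batch, 2, window_len)
        z = self.encoder(x)
        return self.decoder(z), self.regressor(z)
```

Training would typically combine a reconstruction loss on the decoder output with a regression loss on the BP head; how the paper weights or stages these objectives is not stated in the abstract.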
In recent years, physiological-signal-based authentication has shown great promise owing to its inherent robustness against forgery. The electrocardiogram (ECG), the most widely studied biosignal in this regard, has also received the most attention. Many studies have demonstrated that ECG signals from different persons can be analyzed to identify them with acceptable accuracy. In this work, we present EDITH, a deep-learning-based framework for ECG biometric authentication. Moreover, we hypothesize and demonstrate that a Siamese architecture can be used over typical distance metrics to improve performance. We evaluated EDITH on four commonly used datasets, where it outperformed prior works while using fewer beats. EDITH performs competitively using just a single heartbeat (96-99.75% accuracy) and can be further enhanced by fusing multiple beats (100% accuracy from 3 to 6 beats). Furthermore, the proposed Siamese architecture reduces the authentication equal error rate (EER) to 1.29%. A limited case study of EDITH with real-world experimental data also suggests its potential as a practical authentication system.
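A hedged sketch of Siamese-style verification in which a small learned similarity head replaces a fixed distance metric, the idea the abstract credits for the EER improvement. The embedding network, dimensions, and head design are illustrative assumptions, not EDITH's exact architecture.

```python
import torch
import torch.nn as nn

class BeatEmbedder(nn.Module):
    """Embeds a single heartbeat segment into a fixed-size vector."""
    def __init__(self, beat_len=256, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Flatten(),
            nn.Linear(16 * (beat_len // 4), emb_dim),
        )

    def forward(self, x):          # x: (batch, 1, beat_len)
        return self.net(x)

class SiameseVerifier(nn.Module):
    """Scores whether two beats come from the same person."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.embed = BeatEmbedder(emb_dim=emb_dim)
        # Learned similarity head applied to the element-wise difference,
        # instead of a fixed Euclidean or cosine distance.
        self.head = nn.Sequential(
            nn.Linear(emb_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, beat_a, beat_b):
        diff = torch.abs(self.embed(beat_a) - self.embed(beat_b))
        return torch.sigmoid(self.head(diff))  # P("same identity")
```

Training such a verifier on same/different beat pairs lets the head learn a task-specific notion of similarity, which is the advantage the abstract claims over typical distance metrics.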
Cardiovascular diseases are one of the most severe causes of mortality, taking a heavy toll of lives annually throughout the world. Continuous monitoring of blood pressure seems to be the most viable option, but this demands an invasive process, introducing several layers of complexity. This motivates us to develop a method to predict the continuous arterial blood pressure (ABP) waveform through a non-invasive approach using photoplethysmogram (PPG) signals. In addition, we explore the advantage of deep learning, as it makes the computation of hand-crafted features irrelevant; such features would otherwise force us to stick to ideally shaped PPG signals, a drawback of existing approaches. Thus, we present PPG2ABP, a deep-learning-based method that can predict the continuous ABP waveform from an input PPG signal with a mean absolute error of 4.604 mmHg, preserving the shape, magnitude, and phase in unison. The more astounding success of PPG2ABP, however, is that the values of DBP, MAP, and SBP computed from the predicted ABP waveform outperform existing works under several metrics, despite PPG2ABP not being explicitly trained to do so.
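A simplified sketch of the waveform-to-waveform formulation: a 1D fully convolutional network mapping a PPG window to an ABP window of equal length, with SBP/DBP/MAP then derived from the predicted waveform, as the abstract describes. The real PPG2ABP pipeline is deeper (the layer stack here is a placeholder), so this only illustrates the signal-in/signal-out setup.

```python
import torch.nn as nn

class PPGtoABP(nn.Module):
    """Maps a PPG window to an ABP waveform of the same length."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=9, padding=4),
        )

    def forward(self, ppg):        # ppg: (batch, 1, T)
        return self.net(ppg)       # abp: (batch, 1, T)

def bp_from_waveform(abp):
    """Derive SBP/DBP/MAP from a predicted ABP window; per the abstract,
    these are computed from the waveform rather than predicted directly."""
    sbp = abp.max(dim=-1).values   # systolic peak
    dbp = abp.min(dim=-1).values   # diastolic trough
    map_ = abp.mean(dim=-1)        # mean arterial pressure
    return sbp, dbp, map_
```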
System identification, also known as learning forward models, transfer functions, system dynamics, etc., has a long tradition in both science and engineering across different fields. In particular, it is a recurring theme in Reinforcement Learning research, where forward models approximate the state transition function of a Markov Decision Process by learning a mapping from the current state and action to the next state. This problem is commonly framed directly as a Supervised Learning problem. This common approach faces several difficulties due to the inherent complexities of the dynamics to learn, for example, delayed effects, high non-linearity, non-stationarity, partial observability and, more importantly, error accumulation when using bootstrapped predictions (predictions based on past predictions) over large time horizons. Here we explore the use of Reinforcement Learning for this problem. We elaborate on why and how this problem fits naturally and soundly as a Reinforcement Learning problem, and present experimental results that demonstrate RL is a promising technique for solving these kinds of problems.
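A hedged sketch of one way to cast forward-model learning in RL terms: the "action" is the model's prediction of the next state, the reward is the negative prediction error, and rollouts bootstrap on the model's own predictions so that long-horizon error accumulation is directly penalized. The `model.predict` interface is hypothetical, and the paper's exact formulation may differ.

```python
import numpy as np

def rollout_return(model, states, actions, horizon=10):
    """Score a forward model by its bootstrapped multi-step rollout.

    states:  sequence of observed states s_0 .. s_H
    actions: sequence of applied actions a_0 .. a_{H-1}
    """
    total_reward = 0.0
    pred = states[0]
    for t in range(min(horizon, len(states) - 1)):
        # Bootstrap: feed the model's own previous prediction back in,
        # instead of the ground-truth state, as in deployment.
        pred = model.predict(pred, actions[t])
        error = np.linalg.norm(pred - states[t + 1])
        total_reward += -error     # reward = negative prediction error
    return total_reward
```

Optimizing this return (e.g., with a policy-gradient method over the model's parameters) rewards predictors that stay accurate over the whole horizon, which supervised one-step training does not guarantee.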
Detecting personal health mentions on social media is essential to complement existing health surveillance systems. However, annotating data for detecting health mentions at a large scale is a challenging task. This research employs a multitask learning framework to leverage available annotated data from a related task to improve performance on the main task of detecting personal health experiences mentioned in social media texts. Specifically, we incorporate emotional information into the target task by using emotion detection as an auxiliary task. Our approach significantly improves a wide range of personal health mention detection tasks compared to a strong state-of-the-art baseline.
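A minimal sketch of the hard-parameter-sharing setup the abstract describes: a shared text encoder with one head for the main health-mention task and one for the auxiliary emotion task. It assumes a HuggingFace-style encoder exposing `pooler_output`; the hidden size, label counts, and loss weighting are placeholders.

```python
import torch.nn as nn

class MultitaskModel(nn.Module):
    """Shared encoder with health-mention (main) and emotion (auxiliary) heads."""
    def __init__(self, encoder, hidden=768, n_health=2, n_emotions=6):
        super().__init__()
        self.encoder = encoder                     # e.g. a BERT-style encoder
        self.health_head = nn.Linear(hidden, n_health)
        self.emotion_head = nn.Linear(hidden, n_emotions)

    def forward(self, **inputs):
        h = self.encoder(**inputs).pooler_output   # shared representation
        return self.health_head(h), self.emotion_head(h)

# Training would minimize a weighted joint loss, e.g.
#   loss = loss_health + lam * loss_emotion
# so the auxiliary emotion signal shapes the shared representation.
```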
The health mention classification (HMC) task is the process of identifying and classifying mentions of health-related concepts in text. This can be useful for identifying and tracking the spread of diseases through social media posts. However, this is a non-trivial task. Here we build on recent studies suggesting that using emotional information may improve performance on this task. Our study results in a framework for health mention classification that incorporates affective features. We present two methods, an intermediate task fine-tuning approach (implicit) and a multi-feature fusion approach (explicit), to incorporate emotions into our target task of HMC. We evaluated our approach on 5 HMC-related datasets from different social media platforms, including three from Twitter, one from Reddit, and one from a combination of social media sources. Extensive experiments demonstrate that our approach yields statistically significant performance gains on HMC tasks. Using the multi-feature fusion approach, we achieve at least a 3% improvement in F1 score over BERT baselines across all datasets. We also show that considering only negative emotions does not significantly affect performance on the HMC task. Additionally, our results indicate that HMC models infused with emotional knowledge are an effective alternative, especially when other HMC datasets are unavailable for domain-specific fine-tuning. The source code for our models is freely available at https://github.com/tahirlanre/Emotion_PHM.
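A hedged sketch of the explicit multi-feature fusion idea: the encoder's text representation is concatenated with a separately computed affective feature vector before classification. The affective feature dimension, hidden size, and class count are illustrative placeholders; see the repository above for the authors' actual implementation.

```python
import torch
import torch.nn as nn

class FusionHMC(nn.Module):
    """Explicit fusion: [text representation ; affective features] -> classes."""
    def __init__(self, encoder, hidden=768, affect_dim=10, n_classes=3):
        super().__init__()
        self.encoder = encoder                     # HuggingFace-style encoder
        self.classifier = nn.Linear(hidden + affect_dim, n_classes)

    def forward(self, affect_feats, **inputs):
        text_repr = self.encoder(**inputs).pooler_output
        # Concatenate the affective feature vector onto the text embedding.
        fused = torch.cat([text_repr, affect_feats], dim=-1)
        return self.classifier(fused)
```

The implicit variant would instead fine-tune the encoder on an emotion-detection dataset first (an intermediate task), then fine-tune on HMC with no extra input features.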
Wireless Sensor Network (WSN) applications are reshaping warehouse monitoring systems, allowing them to track and locate massive numbers of logistic entities in real time. To support these tasks, classic Radio Frequency (RF)-based localization approaches (e.g., triangulation and trilateration) face challenges due to multi-path fading and signal loss in noisy warehouse environments. In this paper, we investigate machine learning methods on a new grid-based WSN platform called Sensor Floor that can overcome these issues. Sensor Floor consists of 345 nodes installed across the floor of our logistics research hall, each with dual-band RF and Inertial Measurement Unit (IMU) sensors. Our goal is to localize all logistic entities; for this study, we use a mobile robot. We record distributed sensing measurements of Received Signal Strength Indicator (RSSI) and IMU values as the dataset, and position tracking from a Vicon system as the ground truth. The asynchronously collected data is pre-processed and used to train a Random Forest and a Convolutional Neural Network (CNN). The CNN model with regularization outperforms the Random Forest in terms of localization accuracy, achieving approximately 15 cm. Moreover, the CNN architecture can be configured flexibly depending on the warehouse scenario. The hardware, software, and CNN architecture of the Sensor Floor are open-source at https://github.com/FLW-TUDO/sensorfloor.
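A hedged sketch of CNN-based localization from distributed RSSI readings: the floor nodes' measurements are arranged as a 2D grid "image" and regressed to an (x, y) position, with dropout as the regularizer. The 15 x 23 grid layout (15 x 23 = 345 nodes), channel counts, and dropout rate are assumptions for illustration, not the platform's actual configuration.

```python
import torch.nn as nn

class FloorLocalizer(nn.Module):
    """Regresses an (x, y) position from a grid of per-node RSSI values."""
    def __init__(self, grid_h=15, grid_w=23):  # assumed 15*23 = 345 node layout
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Dropout(0.3),                    # regularization, as in the text
            nn.Flatten(),
            nn.Linear(32 * grid_h * grid_w, 2), # (x, y) in floor coordinates
        )

    def forward(self, rssi_grid):               # (batch, 1, grid_h, grid_w)
        return self.net(rssi_grid)
```

Because the input is just a grid of node readings, the convolutional front end can be reconfigured (grid shape, channels, extra IMU input planes) to match different warehouse scenarios, which matches the flexibility the abstract notes.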
Generative models have been very successful over the years and have received significant attention for synthetic data generation. As deep learning models become more and more complex, they require large amounts of data to perform accurately. In medical image analysis, such generative models play a crucial role, as the available data is limited due to challenges related to data privacy, lack of data diversity, and uneven data distributions. In this paper, we present a method to generate brain tumor MRI images using generative adversarial networks. We utilize StyleGAN2 with the ADA methodology to generate high-quality brain MRI with tumors while using a significantly smaller amount of training data compared to existing approaches. We use three pre-trained models for transfer learning. Results demonstrate that the proposed method can learn the distribution of brain tumors. Furthermore, the model can generate high-quality synthetic brain MRI with a tumor, mitigating small-sample-size issues. The approach addresses limited data availability by generating realistic-looking brain MRI with tumors. The code is available at: https://github.com/rizwanqureshi123/Brain-Tumor-Synthetic-Data.
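A hedged sketch of the core mechanism that makes ADA suitable for small datasets like this one: the probability of augmenting the discriminator's inputs is adjusted adaptively from an overfitting heuristic (e.g., how often the discriminator's outputs on real images are positive). The target value and adjustment speed below are illustrative constants, not the paper's exact settings.

```python
def update_augment_p(p, d_real_logits, target=0.6, speed=1e-4):
    """Adaptive Discriminator Augmentation (ADA) update sketch.

    p:             current augmentation probability in [0, 1]
    d_real_logits: discriminator outputs on a batch of real images (tensor)
    """
    # Overfitting heuristic: fraction of real logits the discriminator
    # scores as positive. High values suggest D is memorizing the reals.
    rt = (d_real_logits > 0).float().mean().item()
    # Nudge augmentation strength up when D overfits, down otherwise.
    p += speed if rt > target else -speed
    return min(max(p, 0.0), 1.0)
```

In the transfer-learning setup the abstract mentions, training would start from one of the three pre-trained generators and fine-tune on the tumor MRI data with this augmentation schedule keeping the discriminator from overfitting the small dataset.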
Spatial perception is a key task in several robotics applications. In general, it involves the nonlinear estimation of hidden variables that represent the state of the robot/environment. However, in the presence of outliers the standard nonlinear least squares formulation results in poor estimates. Several methods have been considered in the literature to improve the reliability of the estimation process. Most are based on heuristics, since guaranteed global robust estimation is generally impractical due to high computational costs. Recently, general-purpose robust estimation heuristics have been proposed that leverage existing non-minimal solvers available for the outlier-free formulations without the need for an initial guess. In this work, we propose two similar heuristics backed by Bayesian theory. We evaluate these heuristics in practical scenarios to demonstrate their merits in different applications, including 3D point cloud registration, mesh registration, and pose graph optimization.
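A hedged sketch of the general pattern such heuristics follow: alternate between (i) solving the outlier-free weighted problem with an existing non-minimal solver and (ii) re-estimating per-measurement inlier weights. The weighting below is a simple Geman-McClure-style rule for illustration; the Bayesian weighting proposed in the paper differs in its details, and `solve_weighted` / `residuals_fn` are hypothetical callbacks standing in for the application-specific solver.

```python
import numpy as np

def robust_estimate(solve_weighted, residuals_fn, n_meas, iters=10, c=1.0):
    """Iteratively reweighted robust estimation around a non-minimal solver.

    solve_weighted(weights) -> estimate x minimizing the weighted problem
    residuals_fn(x)         -> per-measurement residual magnitudes (array)
    """
    weights = np.ones(n_meas)                  # uniform start: no initial guess
    for _ in range(iters):
        x = solve_weighted(weights)            # globally solve weighted problem
        r = residuals_fn(x)
        # Downweight measurements with large residuals (likely outliers).
        weights = (c**2 / (c**2 + r**2)) ** 2
    return x, weights
```

The appeal, as the abstract notes, is that each inner step reuses a certifiable or well-understood outlier-free solver, so robustness is layered on top without a hand-tuned initialization.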
Machine learning algorithms typically assume that the training and test samples come from the same distribution, i.e., are in-distribution. However, in open-world scenarios, streaming big data can be Out-Of-Distribution (OOD), rendering these algorithms ineffective. Prior solutions to the OOD challenge seek to identify invariant features across different training domains. The underlying assumption is that these invariant features should also work reasonably well in the unlabeled target domain. By contrast, this work is interested in domain-specific features, which include both invariant features and features unique to the target domain. We propose a simple yet effective approach that relies on correlations in general, regardless of whether the features are invariant. Our approach uses the most confidently predicted samples identified by an OOD base model (teacher model) to train a new model (student model) that effectively adapts to the target domain. Empirical evaluations on benchmark datasets show that performance improves over the state of the art by roughly 10-20%.
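A minimal sketch of the teacher-student adaptation described above: the OOD base model (teacher) pseudo-labels unlabeled target-domain samples, only the most confident predictions are kept, and a student model is trained on them with a standard classification loss. The confidence threshold and data-loader interface are illustrative assumptions.

```python
import torch

def select_confident(teacher, target_loader, threshold=0.9):
    """Collect target samples the teacher pseudo-labels with high confidence."""
    keep_x, keep_y = [], []
    teacher.eval()
    with torch.no_grad():
        for x in target_loader:                    # unlabeled target batches
            probs = torch.softmax(teacher(x), dim=-1)
            conf, pseudo = probs.max(dim=-1)       # confidence + pseudo-label
            mask = conf > threshold                # keep only confident samples
            keep_x.append(x[mask])
            keep_y.append(pseudo[mask])
    return torch.cat(keep_x), torch.cat(keep_y)

# The student is then trained on (keep_x, keep_y) with cross-entropy,
# letting it pick up target-specific correlations the teacher's
# confident predictions expose.
```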